Remapping Models for Scientific Computing via Graph and Hypergraph Partitioning

Authors

  • B. Barla Cambazoglu
  • Cevdet Aykanat
Abstract

There are numerous parallel scientific computing applications in which the same computation is successively repeated over a problem instance many times with different parameters. In most of these applications, although the initial task-to-processor mapping may be satisfactory in terms of both computational load balance and communication requirements, the quality of this initial mapping typically deteriorates as the computational structure of the application or its parameters change while the computation progresses, thus reducing the efficiency of the parallelization. A solution to this problem is to rebalance the load distribution of the processors whenever needed by rearranging the assignment of tasks to processors, via a process known as remapping. For an efficient parallelization, novel remapping models are needed. These models should not only rebalance the load distribution in the parallel system but also minimize the overheads that may be introduced by the remapping process. Although the overheads heavily depend on the nature of the problem, the most typical ones are incurred due to task migration, data replication, and the remapping computation itself. In the literature, various combinatorial models based on graph partitioning (GP) and hypergraph partitioning (HP) have been proposed as solutions to the remapping problems arising in different types of applications. The proposed models may be broadly classified into two categories: scratch-remap [1], [2], [3], [4], [6] and diffusion-based [5], [6], [7] models. Typically, scratch-remap models work in two phases. In the first phase, tasks are partitioned into parts such that each part has an almost equal computational load. In the second phase, parts are mapped to processors such that the overhead of remapping the tasks to the processors is as low as possible. Diffusion-based models, on the other hand, try to move tasks from heavily loaded processors to lightly loaded processors, taking both load balancing and minimization of the remapping overhead into consideration in a single phase. In both types of models, the tasks and the interactions among them are represented as a graph or a hypergraph, and partitioning heuristics are employed for remapping. In this work, we elaborate on this type of remapping, based on combinatorial algorithms, by discussing our recently proposed GP-based and HP-based remapping models [1], [2] and by using the direct volume rendering (DVR) of unstructured grids as a representative parallel application.

In a typical DVR application, the purpose is to map scalar or vectorial values defined throughout the data cells in a 3D object space (OS) to color information over the pixels in a 2D image space (IS). Both OS and IS parallelizations are possible. In OS parallelization, the data cells forming a volume are partitioned into subvolumes, one per processor. Each processor locally renders the data cells in its subvolume and produces a full-screen but partial image. The partial images are later combined into a whole image via global pixel merging. In IS parallelization, the pixels of the screen are partitioned into subscreens, one per processor. Each processor locally renders its subscreen and produces a small but complete portion of the final image. In parallel DVR, visualization parameters (such as the view point and viewing direction) determine the computational structure of the rendering since they affect both the rendering load distribution of the processors and the interaction between OS and IS primitives. In successively visualizing a data set with different visualization parameters, existing partitions turn into poor partitions that cause a...
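
To make the two-phase scratch-remap idea described above concrete, a minimal Python sketch follows. It is not the GP-based or HP-based models proposed in the paper: simple greedy heuristics stand in for the graph/hypergraph partitioners. Phase one splits the tasks into load-balanced parts, and phase two maps the parts to processors so that as many tasks as possible stay where they already are, which is what keeps the task-migration overhead low. The task names, weights, and current owners in the example are hypothetical.

    # Minimal sketch (not the paper's GP/HP formulation) of two-phase scratch-remap:
    # (1) repartition tasks from scratch into load-balanced parts,
    # (2) map parts to processors so tasks mostly stay put, keeping migration low.
    from collections import Counter


    def balanced_parts(task_weights, num_parts):
        """Phase 1: greedy split of tasks into parts with almost equal load."""
        loads = [0.0] * num_parts
        parts = [[] for _ in range(num_parts)]
        # Heaviest tasks first, each assigned to the currently lightest part.
        for task in sorted(task_weights, key=task_weights.get, reverse=True):
            p = loads.index(min(loads))
            parts[p].append(task)
            loads[p] += task_weights[task]
        return parts


    def map_parts_to_processors(parts, current_owner):
        """Phase 2: assign each part to a distinct processor, greedily maximizing
        the number of tasks already resident there (i.e., minimizing migration)."""
        num_procs = len(parts)
        # overlap[i][q] = how many tasks of part i currently live on processor q
        overlap = [Counter(current_owner[t] for t in part) for part in parts]
        assignment = {}                      # part index -> processor
        free_procs = set(range(num_procs))
        # Consider (part, processor) pairs in order of decreasing overlap.
        pairs = sorted(((overlap[i][q], i, q)
                        for i in range(num_procs) for q in range(num_procs)),
                       reverse=True)
        for _, i, q in pairs:
            if i not in assignment and q in free_procs:
                assignment[i] = q
                free_procs.discard(q)
        return assignment


    if __name__ == "__main__":
        # Hypothetical example: 8 tasks, 2 processors, loads that have drifted.
        weights = {t: w for t, w in zip("abcdefgh", [5, 1, 4, 2, 6, 1, 3, 2])}
        owner = {"a": 0, "b": 0, "c": 0, "d": 0, "e": 0, "f": 1, "g": 1, "h": 1}
        parts = balanced_parts(weights, 2)
        mapping = map_parts_to_processors(parts, owner)
        for i, part in enumerate(parts):
            moved = sum(1 for t in part if owner[t] != mapping[i])
            print(f"part {i} -> processor {mapping[i]}, tasks {part}, migrated {moved}")

In the actual models, phase one would be carried out by a graph or hypergraph partitioner and phase two by a similarity-based mapping heuristic, but the two-phase structure is the same as in this sketch.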


Similar articles

Permuting Sparse Rectangular Matrices into Block-Diagonal Form

We investigate the problem of permuting a sparse rectangular matrix into block-diagonal form. Block-diagonal form of a matrix grants an inherent parallelism for solving the underlying problem, as recently investigated in the context of mathematical programming, LU factorization, and QR factorization. To represent the nonzero structure of a matrix, we propose bipartite graph and hypergraph models t...


An Evaluation of the Zoltan Parallel Graph and Hypergraph Partitioners

Graph partitioning is an important and well studied problem in combinatorial scientific computing, and is commonly used to reduce communication in parallel computing. Different models (graph, hypergraph) and objectives (edge cut, boundary vertices) have been proposed. For large problems, the partitioning itself must be done in parallel. Several software packages, such as ParMetis, PT-Scotch and...


Parallel partitioning with Zoltan: Is hypergraph partitioning worth it?

Graph partitioning is an important and well studied problem in combinatorial scientific computing, and is commonly used to reduce communication in parallel computing. Different models (graph, hypergraph) and objectives (edge cut, boundary vertices) have been proposed. Hypergraph partitioning has become increasingly popular over the last decade. Its main strength is that it accurately captures c...


k-way Hypergraph Partitioning via n-Level Recursive Bisection

We develop a multilevel algorithm for hypergraph partitioning that contracts the vertices one at a time. Using several caching and lazy-evaluation techniques during coarsening and refinement, we reduce the running time by up to two orders of magnitude compared to a naive n-level algorithm that would be adequate for ordinary graph partitioning. The overall performance is even better than the wid...


Partitioning Hypergraphs in Scientific Computing Applications through Vertex Separators on Graphs

The modeling flexibility provided by hypergraphs has drawn a lot of interest from the combinatorial scientific computing community, leading to novel models and algorithms, their applications, and the development of associated tools. Hypergraphs are now a standard tool in combinatorial scientific computing. The modeling flexibility of hypergraphs, however, comes at a cost: algorithms on hypergraphs are inheren...



Publication year: 2006